Training Multi-Layer Perceptron with Enhanced Brain Storm Optimization Metaheuristics
Authors
Abstract
In the domain of artificial neural networks, the learning process represents one of the most challenging tasks. Since classification accuracy highly depends on the weights and biases, it is crucial to find their optimal or suboptimal values for the problem at hand. However, the search space is very large, which makes it difficult to determine proper connection weights and biases. Employing traditional optimization algorithms for this issue leads to slow convergence and is prone to getting stuck in local optima. Moreover, back-propagation, which is most commonly used for multi-layer-perceptron training, can lead to the vanishing gradient issue. As an alternative approach, stochastic algorithms such as nature-inspired metaheuristics are more reliable for complex tasks such as finding proper weights and biases for network training. In this work, we propose an enhanced brain storm optimization-based algorithm for training such networks. In the simulations, ten binary benchmark datasets with different difficulty levels are used to evaluate the efficiency of the proposed algorithm. The results show that the proposed approach is promising, as it achieved better results than other state-of-the-art approaches on the majority of datasets, also in terms of convergence speed, owing to its capability of balancing intensification and diversification and avoiding local minima. The proposed approach obtained the best result on eight out of the ten observed datasets, outperforming all other approaches by 1–2% on average. When mean results are observed, it dominated on nine datasets.
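To make the general recipe in the abstract concrete, below is a minimal sketch: all MLP weights and biases are encoded as one flat vector and that vector is searched with a simplified brain-storm-style population loop instead of back-propagation. This is not the paper's enhanced BSO variant; the toy dataset, layer sizes, clustering shortcut, and every hyper-parameter are assumptions chosen only for readability.

```python
# Minimal sketch, assuming a small MLP and a simplified brain-storm-style search.
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(X, params, sizes):
    """Decode the flat parameter vector into per-layer weights/biases and run a forward pass."""
    out, idx = X, 0
    for i in range(len(sizes) - 1):
        n_in, n_out = sizes[i], sizes[i + 1]
        W = params[idx:idx + n_in * n_out].reshape(n_in, n_out); idx += n_in * n_out
        b = params[idx:idx + n_out]; idx += n_out
        out = out @ W + b
        # tanh on hidden layers, sigmoid on the output layer
        out = np.tanh(out) if i < len(sizes) - 2 else 1.0 / (1.0 + np.exp(-out))
    return out.ravel()

def error_rate(params, X, y, sizes):
    """Classification error used as the fitness to minimize."""
    preds = (mlp_forward(X, params, sizes) > 0.5).astype(int)
    return float(np.mean(preds != y))

def bso_train(X, y, sizes, pop=30, clusters=5, iters=200):
    """Search the flat weight/bias vector with a simplified brain-storm-style loop."""
    dim = sum(sizes[i] * sizes[i + 1] + sizes[i + 1] for i in range(len(sizes) - 1))
    swarm = rng.normal(0.0, 1.0, size=(pop, dim))
    fit = np.array([error_rate(s, X, y, sizes) for s in swarm])
    for t in range(iters):
        # Crude clustering stand-in: rank by fitness and split into equal groups.
        groups = np.array_split(np.argsort(fit), clusters)
        # Logistic step-size schedule: large early steps (diversification),
        # small late steps (intensification).
        step = 1.0 / (1.0 + np.exp(-(0.5 * iters - t) / 20.0)) * rng.random()
        for i in range(pop):
            g = groups[rng.integers(clusters)]
            if rng.random() < 0.8:                      # stay inside one cluster
                base = swarm[g[0]] if rng.random() < 0.5 else swarm[rng.choice(g)]
            else:                                       # blend two clusters
                g2 = groups[rng.integers(clusters)]
                base = 0.5 * (swarm[rng.choice(g)] + swarm[rng.choice(g2)])
            cand = base + step * rng.normal(size=dim)   # Gaussian "new idea"
            f = error_rate(cand, X, y, sizes)
            if f < fit[i]:                              # greedy replacement
                swarm[i], fit[i] = cand, f
    best = int(np.argmin(fit))
    return swarm[best], fit[best]

if __name__ == "__main__":
    # Toy XOR-like binary data as a stand-in for the benchmark datasets used in the paper.
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(int)
    best_params, err = bso_train(X, y, sizes=[2, 6, 1])
    print(f"training error of the best individual: {err:.3f}")
```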
Similar resources
Improvement of Multi-Layer Perceptron (MLP) training using optimization algorithms
Artificial Neural Network (ANN) is one of the modern computational methods proposed to solve increasingly complex problems in the real world (Xie et al., 2006 and Chau, 2007). ANN is characterized by its pattern of connections between the neurons (called its architecture), its method of determining the weights on the connections (called its training, or learning, algorithm), and its activation ...
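As a quick illustration of the three ingredients this snippet lists (the architecture, the connection weights set by a training algorithm, and the activation function), here is a minimal forward pass; the layer sizes and random data are assumptions for illustration only.

```python
# Minimal sketch of an MLP's three configurable ingredients (assumed sizes and data).
import numpy as np

rng = np.random.default_rng(1)
architecture = [4, 8, 1]                       # input -> hidden -> output sizes

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases: the quantities a training/learning algorithm must determine.
weights = [rng.normal(size=(architecture[i], architecture[i + 1]))
           for i in range(len(architecture) - 1)]
biases = [np.zeros(architecture[i + 1]) for i in range(len(architecture) - 1)]

def forward(x):
    """Forward pass: each layer applies its weights, biases, and activation."""
    for W, b in zip(weights, biases):
        x = sigmoid(x @ W + b)
    return x

print(forward(rng.normal(size=(3, 4))).shape)  # (3, 1): one output per sample
```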
Improving Particle Swarm Optimization Based on Neighborhood and Historical Memory for Training Multi-Layer Perceptron
Many optimization problems can be found in scientific and engineering fields. It is a challenge for researchers to design efficient algorithms to solve these optimization problems. The particle swarm optimization (PSO) algorithm, which is inspired by the social behavior of bird flocks, is a global stochastic method. However, a monotonic and static learning model, which is applied for all partic...
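For context, the following is a sketch of the standard global-best PSO update that this snippet builds on; it is not the neighborhood/historical-memory variant proposed in that paper, and the toy objective and coefficients are assumptions.

```python
# Sketch of the textbook global-best PSO update on an assumed toy objective.
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):                       # toy objective to minimize
    return np.sum(x ** 2, axis=-1)

dim, pop, iters = 10, 20, 300
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

pos = rng.uniform(-5, 5, size=(pop, dim))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), sphere(pos)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
    # Velocity blends inertia, attraction to the personal best, and attraction to the global best.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    val = sphere(pos)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(f"best value found: {pbest_val.min():.6f}")
```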
Multi-Layer Perceptron with Impulse Glial Network
We have proposed the glial network, which was inspired by features of the brain. In the glial network, glias generate independent oscillations, and these oscillations propagate to neurons and other glias. We confirmed that the glial network improves the learning performance of the Multi-Layer Perceptron (MLP). In this article, we investigate the MLP with the impulse glial network. The glias have on...
Multi-Layer Perceptron with Pulse Glial Chain
A glia is a nervous cell in the brain. Currently, the glia is known as an important cell for human cerebration, because the glia transmits signals to neurons and other glias. We notice features of the glia and consider applying them to an artificial neural network. In this paper, we propose a Multi-Layer Perceptron (MLP) with a pulse glial chain. The pulse glial chain is inspired fro...
Attribute Suppression with Multi-Layer Perceptron
In this paper, we introduce a method that allows one to efficiently evaluate the "importance" of each coordinate of the input vector of a neural network. This measurement can be used to obtain information about the studied data. It can also be used to suppress irrelevant inputs in order to speed up the classification process conducted by the network.
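The snippet does not spell out its importance measure, so the sketch below uses a generic permutation-importance stand-in (an assumption, not that paper's method): shuffle one input coordinate at a time and record how much a trained model's accuracy drops; coordinates whose shuffling hurts little are candidates for suppression.

```python
# Generic permutation-importance stand-in (assumed method, toy model and data).
import numpy as np

rng = np.random.default_rng(3)

def accuracy(model, X, y):
    return np.mean(model(X) == y)

def input_importance(model, X, y, repeats=10):
    """Accuracy drop when each input column is shuffled; a larger drop means a more important input."""
    base = accuracy(model, X, y)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores[j] += (base - accuracy(model, Xp, y)) / repeats
    return scores

# Toy classifier that only uses the first input coordinate, so column 0 should score highest.
model = lambda X: (X[:, 0] > 0).astype(int)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
print(input_importance(model, X, y).round(3))
```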
Journal
Journal title: Computers, Materials & Continua
Year: 2022
ISSN: 1546-2218, 1546-2226
DOI: https://doi.org/10.32604/cmc.2022.020449